Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
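Since the Flan-T5 checkpoints are public, a minimal zero-shot usage sketch (assuming the Hugging Face transformers library and the google/flan-t5-base checkpoint; the prompt is an arbitrary illustrative instruction, not one of the 1.8K training tasks) looks like this:

```python
# Minimal zero-shot use of a released Flan-T5 checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Instruction-finetuned models take tasks phrased as natural-language
# instructions, with no task-specific head or further finetuning.
prompt = "Answer the following yes/no question. Can a dog drive a car?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```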
We provide results that exactly quantify how data augmentation affects the convergence rate and variance of estimates. They lead to some unexpected findings: Contrary to common intuition, data augmentation may increase rather than decrease the uncertainty of estimates, such as the empirical prediction risk. Our main theoretical tool is a limit theorem for functions of randomly transformed, high-dimensional random vectors. The proof draws on work in probability on noise stability of functions of many variables. The pathological behavior we identify is not a consequence of complex models, but can occur even in the simplest settings -- one of our examples is a ridge regressor with two parameters. On the other hand, our results also show that data augmentation can have real, quantifiable benefits.
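As a toy illustration of measuring such effects (the setup below is invented for illustration, not taken from the paper), one can Monte-Carlo the variance of the empirical prediction risk of a two-parameter ridge regressor with and without a simple input-jitter augmentation:

```python
# Toy Monte Carlo: compare the variance of the empirical prediction risk
# of a two-parameter ridge regressor with and without augmentation.
import numpy as np

rng = np.random.default_rng(0)
n, lam, reps = 50, 1.0, 2000
beta_true = np.array([1.0, -2.0])

def ridge_risk(X, y):
    beta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
    return np.mean((y - X @ beta) ** 2)  # empirical prediction risk

plain, augmented = [], []
for _ in range(reps):
    X = rng.normal(size=(n, 2))
    y = X @ beta_true + rng.normal(size=n)
    plain.append(ridge_risk(X, y))
    # augmentation: append jittered copies of the inputs, same targets
    Xa = np.vstack([X, X + 0.5 * rng.normal(size=X.shape)])
    augmented.append(ridge_risk(Xa, np.concatenate([y, y])))

print("var(risk), no augmentation:  ", np.var(plain))
print("var(risk), with augmentation:", np.var(augmented))
```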
We develop a wall model for large-eddy simulation (LES) that takes into account various pressure-gradient effects using multi-agent reinforcement learning (MARL). The model is trained using low-Reynolds-number flow over periodic hills with agents distributed on the wall along the computational grid points. The model utilizes a wall eddy-viscosity formulation as the boundary condition, which is shown to provide better predictions of the mean velocity field than the typical wall-shear stress formulation. Each agent receives states based on local instantaneous flow quantities at an off-wall location, computes a reward based on the estimated wall-shear stress, and provides an action to update the wall eddy viscosity at each time step. The trained wall model is validated in wall-modeled LES (WMLES) of flow over periodic hills at higher Reynolds numbers, and the results show the effectiveness of the model on flow with pressure gradients. The analysis of the trained model indicates that the model is capable of distinguishing between the various pressure gradient regimes present in the flow.
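Schematically, one environment step for a single wall agent might look as follows (names, shapes, and the reward form are assumptions for illustration, not the paper's implementation):

```python
# One schematic MARL step for a wall agent in WMLES: observe local
# off-wall flow quantities, act by scaling the wall eddy viscosity,
# and receive a reward tied to the estimated wall-shear stress.
import numpy as np

def agent_step(u_offwall, y_offwall, nu, nu_t_wall, action, tau_ref):
    # action: bounded multiplicative update of the wall eddy viscosity
    nu_t_wall *= float(np.clip(action, 0.5, 2.0))
    # boundary condition: total viscosity sets the modeled wall stress
    tau_wall = (nu + nu_t_wall) * u_offwall / y_offwall
    # reward: negative relative error against the reference wall stress
    reward = -abs(tau_wall - tau_ref) / abs(tau_ref)
    # state: nondimensional local instantaneous flow quantities
    state = np.array([u_offwall * y_offwall / nu, nu_t_wall / nu])
    return state, reward, nu_t_wall
```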
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
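A short example of that compositional, PyTorch-style API (the file path in the comment is a placeholder):

```python
# Dictionary-based preprocessing pipeline plus a purpose-built 3D UNet.
import torch
from monai.networks.nets import UNet
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, ScaleIntensityd

preprocess = Compose([
    LoadImaged(keys=["image"]),           # reads medical formats, e.g. NIfTI
    EnsureChannelFirstd(keys=["image"]),  # enforce channel-first layout
    ScaleIntensityd(keys=["image"]),      # normalize intensities
])
# preprocess({"image": "case_001.nii.gz"}) would yield a ready tensor.

net = UNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)
logits = net(torch.randn(1, 1, 64, 64, 64))  # e.g. one 64^3 volume
print(logits.shape)  # torch.Size([1, 2, 64, 64, 64])
```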
Current leading mispronunciation detection and diagnosis (MDD) systems achieve promising performance through end-to-end phoneme recognition. One challenge for such end-to-end solutions is the scarcity of human-annotated phonemes on natural L2 speech. In this work, we leverage unlabeled L2 speech via a pseudo-labeling (PL) procedure and extend the fine-tuning approach based on pre-trained self-supervised learning (SSL) models. Specifically, we use wav2vec 2.0 as our SSL model and fine-tune it using the original labeled L2 speech samples together with the created pseudo-labeled L2 speech samples. Our pseudo-labels are dynamic and are generated by an ensemble of the online model, which ensures that our model is robust to pseudo-label noise. We show that fine-tuning with pseudo-labels achieves a 5.35% phoneme error rate reduction and a 2.48% MDD F1 score improvement over a baseline trained on labeled samples only. The proposed PL method is also shown to outperform conventional offline PL methods. Compared with state-of-the-art MDD systems, our MDD solution produces more accurate and consistent phonetic error diagnoses. In addition, we conduct an open test on the separate UTD-4Accents dataset, where our system's recognition outputs show strong correlation with human perception in terms of accent and intelligibility.
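A schematic sketch of the dynamic PL loop (assuming a wav2vec 2.0-style CTC model that returns .logits; the ensembling rule and interfaces here are illustrative assumptions, not the paper's exact recipe):

```python
# Dynamic pseudo-labeling: an ensemble of recent "online" checkpoints
# relabels unlabeled L2 speech while fine-tuning mixes both losses.
import torch

def pseudo_labels(ensemble, wav):
    """Average CTC posteriors over the ensemble so targets are robust
    to the pseudo-label noise of any single checkpoint."""
    with torch.no_grad():
        probs = torch.stack([m(wav).logits.softmax(-1) for m in ensemble])
    return probs.mean(0).argmax(-1)  # frame-level phoneme targets

def training_step(model, ensemble, labeled, unlabeled_wav, ctc_loss, opt):
    # supervised CTC loss on human-annotated L2 speech ...
    sup = ctc_loss(model(labeled["wav"]).logits, labeled["phonemes"])
    # ... plus CTC loss against freshly regenerated pseudo-labels
    pl = ctc_loss(model(unlabeled_wav).logits,
                  pseudo_labels(ensemble, unlabeled_wav))
    (sup + pl).backward()
    opt.step()
    opt.zero_grad()
```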
Highly dynamic mobile ad-hoc networks (MANETs) remain among the most challenging environments in which to develop and deploy robust, efficient, and scalable routing protocols. In this paper, we propose the DeepCQ+ routing protocol, which in a novel manner integrates emerging multi-agent deep reinforcement learning (MADRL) techniques into existing Q-learning-based routing protocols and their variants, and achieves persistently higher performance across a wide range of topology and mobility configurations. While preserving the overall protocol structure of Q-learning-based routing protocols, DeepCQ+ replaces statically configured parameterized thresholds and hand-written rules with carefully designed MADRL agents, so that no configuration of such parameters is required. Extensive simulation shows that, compared with its Q-learning-based counterparts, DeepCQ+ yields significantly higher end-to-end throughput without noticeable degradation in end-to-end delay (hop count). Qualitatively, and perhaps more importantly, DeepCQ+ maintains remarkably similar performance gains in many scenarios for which it was not trained, in terms of network size, mobility conditions, and traffic dynamics. To the best of our knowledge, this is the first successful application of an MADRL framework to the MANET routing problem that achieves a high degree of scalability and robustness even in environments outside the trained range of scenarios. This implies that our MARL-based DeepCQ+ design significantly improves the performance of the Q-learning-based CQ+ baseline and increases its practicality and explainability, since real-world MANET environments will likely vary outside the trained range of MANET scenarios. Additional techniques to further increase the gains in performance and scalability are discussed.
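The core idea can be caricatured in a few lines (the state features and stand-in policy below are invented for illustration, not the protocol's actual design):

```python
# Keep the Q-learning routing update, but let a trained per-node agent
# make the forwarding decision that hand-tuned thresholds used to make.
ALPHA, GAMMA = 0.1, 0.9
Q = {}  # Q[(node, neighbor, dest)] -> estimated routing quality

def q_update(node, neighbor, dest, reward, best_next_q):
    key = (node, neighbor, dest)
    Q[key] = Q.get(key, 0.0) + ALPHA * (reward + GAMMA * best_next_q - Q.get(key, 0.0))

def next_hop(node, neighbors, dest, policy):
    # CQ+-style rules compare these Q-values with static thresholds;
    # a MADRL policy instead maps the same local state to a decision.
    state = [Q.get((node, n, dest), 0.0) for n in neighbors]
    return neighbors[policy(state)]

# stand-in policy before training: greedy argmax over neighbor Q-values
greedy = lambda state: max(range(len(state)), key=state.__getitem__)
```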
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation. TDW enables the simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments. Unique properties include: real-time near-photo-realistic image rendering; a library of objects and environments, plus routines for their customization; generative procedures for efficiently building classes of new environments; high-fidelity audio rendering; realistic physical interactions for a variety of material types, including cloth, liquids, and deformable objects; customizable agents that embody AI agents; and support for human interaction via VR devices. TDW's API enables multiple agents to interact within a simulation and returns a range of sensor and physics data representing the state of the world. We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science, including multi-modal physical scene understanding, physical dynamics prediction, multi-agent interaction, models that learn like a child, and attention studies in humans and neural networks.
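A minimal controller sketch using TDW's command/response API (assuming the tdw Python package; exact command and model names can vary across releases):

```python
# Build a procedural room, add a library object, request physics data,
# and step the simulation frame by frame.
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils

c = Controller()  # launches the simulation build and connects to it
resp = c.communicate([
    TDWUtils.create_empty_room(12, 12),
    c.get_add_object("iron_box", object_id=0,
                     position={"x": 0, "y": 1, "z": 0}),
    {"$type": "send_rigidbodies", "frequency": "always"},
])
for _ in range(100):          # let the object fall under physics
    resp = c.communicate([])  # an empty command list advances one frame
c.communicate({"$type": "terminate"})
```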
Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays [1]. NumPy is the primary array programming library for the Python language [2,3,4,5]. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance, and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [6] and the first imaging of a black hole [7]. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries.
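Those fundamental concepts (n-dimensional arrays, broadcasting, vectorized reductions, and boolean indexing) fit in a few lines:

```python
import numpy as np

signal = np.random.default_rng(0).normal(size=(4, 1000))  # 4 channels
centered = signal - signal.mean(axis=1, keepdims=True)    # broadcasting
rms = np.sqrt((centered ** 2).mean(axis=1))               # vectorized reduction
loud = signal[rms > rms.mean()]                           # boolean indexing
print(loud.shape)
```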
Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that addresses these issues in isolation. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection/hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.
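The key framing is that the algorithm choice is just another dimension of one joint search space. A toy illustration of that framing (random search stands in for the Bayesian optimization used in the paper, and the two-algorithm scikit-learn space is invented, not WEKA's):

```python
# Combined algorithm selection and hyperparameter optimization (CASH):
# sample an algorithm AND its hyperparameters in a single draw.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
space = [
    lambda: KNeighborsClassifier(n_neighbors=random.randint(1, 15)),
    lambda: DecisionTreeClassifier(max_depth=random.randint(2, 20)),
]

best_score, best_model = -1.0, None
for _ in range(30):                 # 30 draws from the joint space
    model = random.choice(space)()  # algorithm + hyperparameters
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_model = score, model
print(best_score, best_model)
```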
In this paper, we propose DimonGen, which aims to generate diverse sentences describing concept relationships in various everyday scenarios. To support this, we create a benchmark dataset for this task by adapting the existing CommonGen dataset and propose a two-stage model called MoREE (Mixture of Retrieval-Enhanced Experts) to generate the target sentences. MoREE consists of a mixture of retriever models that retrieve diverse context sentences related to the given concepts, and a mixture of generator models that generate diverse sentences based on the retrieved contexts. We conduct experiments on the DimonGen task and show that MoREE outperforms strong baselines in terms of both the quality and diversity of the generated sentences. Our results demonstrate that MoREE is able to generate diverse sentences that reflect different relationships between concepts, leading to a comprehensive understanding of concept relationships.
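A schematic view of the two stages (the component models and the expert/context pairing rule are placeholders, not the paper's architecture):

```python
# Stage 1: a mixture of retrievers gathers diverse context sentences;
# stage 2: a mixture of generators writes one sentence per expert,
# each grounded in a different retrieved view of the concepts.
def moree_generate(concepts, retrievers, generators, k=3):
    contexts = [r(concepts, top_k=k) for r in retrievers]
    return [g(concepts, ctx) for g, ctx in zip(generators, contexts)]
```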